Addressing the Regulatory Gap: Moving Towards an EU AI Audit Ecosystem Beyond the AIA by Including Civil Society
Hartmann, David, de Pereira, José Renato Laranjeira, Streitbörger, Chiara, Berendt, Bettina
The European legislature has proposed the Digital Services Act (DSA) and Artificial Intelligence Act (AIA) to regulate platforms and Artificial Intelligence (AI) products. We review to what extent third-party audits are part of both laws and to what extent access to models and data is provided. By considering the value of third-party audits and third-party data access in an audit ecosystem, we identify a regulatory gap: the Artificial Intelligence Act does not provide access to data for researchers and civil society. Our contributions to the literature include: (1) defining an AI audit ecosystem that incorporates compliance and oversight; (2) highlighting a regulatory gap within the DSA and AIA regulatory framework that prevents the establishment of an AI audit ecosystem; (3) emphasizing that third-party audits by research and civil society must be part of that ecosystem, and demanding that the AIA include data and model access for certain AI products. We call for the DSA to provide NGOs and investigative journalists with data access to platforms via delegated acts, and for adaptations and amendments to the AIA to provide third-party audits and data and model access at least for high-risk systems, closing the regulatory gap. Regulations modeled after European Union AI regulations should enable data access and third-party audits, fostering an AI audit ecosystem that promotes compliance and oversight mechanisms.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Netherlands > South Holland > Rotterdam (0.04)
- Europe > Germany > Berlin (0.04)
- Overview (0.66)
- Research Report (0.50)
- Media > News (1.00)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
What the UK's six AI principles mean for financial services
The regulatory framework is to be built around six cross-sector principles that the UK government outlined in July, focusing on 'high-risk' AI – an approach to regulating AI that differs significantly from the one proposed in the EU under the planned AI Act. Formal regulatory guidance to steer firms' approach to using AI is likely to be issued in the UK in due course, but we have assessed each of the six principles against the views expressed by regulators via the AI Public Private Forum (AIPPF) – a forum set up by the Bank of England and the Financial Conduct Authority (FCA) – in an effort to understand what is likely to be expected of firms. Undertaking an AI audit and assessing how AI use aligns with business objectives are just two of the measures firms should consider to best prepare themselves for the new regulatory regime. A common theme across both the EU and UK proposals is ensuring the safe use of AI. Detailed UK guidance on risk categorisation is limited at this stage, but financial services firms would be wise to undertake an audit of their AI use and create an inventory of all AI applications in use or in development.
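The audit-and-inventory step recommended above is, at bottom, a structured record-keeping exercise. As a minimal sketch – the field names here are hypothetical and not drawn from any regulator's template – such an inventory might look like:

```python
from dataclasses import dataclass, field

@dataclass
class AIApplication:
    """One entry in a firm's AI inventory (illustrative fields only)."""
    name: str
    business_purpose: str
    status: str                       # "in use" or "in development"
    risk_notes: list = field(default_factory=list)

inventory = [
    AIApplication("credit-scoring-model", "retail lending decisions", "in use"),
    AIApplication("support-chatbot", "customer support triage", "in development"),
]

# A first audit pass might simply surface everything already live,
# since deployed systems are the most urgent to assess.
live = [app.name for app in inventory if app.status == "in use"]
print(live)  # ['credit-scoring-model']
```

Even a flat list of this kind gives a firm a starting point for mapping each application onto whatever risk categorisation the UK guidance eventually specifies.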
The European legal approach to artificial intelligence: what will it mean for businesses?
The European Union (hereinafter "the EU") often leads the way in establishing comprehensive legal frameworks for novel issues. As a reminder, it was a pioneer in the area of data protection through its adoption of the EU Data Protection Directive as early as 1995, and more recently through its enactment of the General Data Protection Regulation (GDPR) in 2016, the most stringent data protection law internationally. Similarly, the EU is currently pushing for the adoption of a detailed regulation for artificial intelligence (hereinafter "AI") systems: the Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (hereinafter "the EU AI Act draft"). First presented in April 2021 by the European Commission, this law is a breakthrough endeavor that will surely have many repercussions, both at the EU level and internationally. In the AI sector, the EU AI Act is currently a flagship initiative that seeks to ensure the safety and trustworthiness of high-risk AI systems developed and used in the EU. It is the first law to solely address AI, and it is expected to become a "GDPR for AI".
- Law (1.00)
- Government > Regional Government > Europe Government (1.00)
EU: Proposed Artificial Intelligence Law Could Affect Employers Globally
Companies with employees in the European Union (EU) could be affected by a landmark proposal to regulate the use of artificial intelligence (AI) across the region. The EU Artificial Intelligence Act, now working its way through the legislative process, is expected to shape technology and standards worldwide. The act comprises a broad set of rules seeking to regulate the use of AI across industries and social activities, noted Jean-François Gerard, a Brussels-based attorney for Freshfields Bruckhaus Deringer. The AI regulation proposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rule, he said. The proposal would classify different AI applications as unacceptable, high, limited or minimal risks, according to a client briefing Gerard helped produce.
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.37)
Will evolving regulations stymie AI innovations?
"A model is as good as the underlying data," said Jayachandran Ramachandran, SVP of Artificial Intelligence Labs at Course5 Intelligence, during his MLDS talk "Will evolving regulations stymie AI innovations?" He discussed how industries and governments recognise this problem and develop regulations and recommendations. He also touched on the recommendations and implications related to the European Union's draft AI regulations. Today, most countries have an AI policy and strategies in place. The EU is at the forefront of AI regulations and drafts. "The EU draft in 2021 is acting as a benchmark for other countries," Ramachandran noted. The draft seeks to ensure that AI policy is human-centric, sustainable, secure, inclusive and trustworthy. Additionally, the draft focuses on a seamless transition of AI from the lab to the market. Any system deployed for users based in the EU will be within the scope of this AI regulation. If the consumers are based outside the EU, they will not be held ...
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation
Mokander, Jakob, Axente, Maria, Casolari, Federico, Floridi, Luciano
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > California (0.04)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (1.00)
- Banking & Finance (1.00)
MedTech Europe calls for urgent clarification of EU artificial intelligence proposal
MedTech Europe has called for the urgent clarification of a proposed artificial intelligence regulation because it uses an overly broad definition and is misaligned with existing regulatory frameworks. The European Commission outlined its plans to regulate AI, including medical devices and in vitro diagnostics that feature the technology, earlier this year. Under the proposal, the European Union would require high-risk AI systems to "comply with certain mandatory requirements" before coming to market. The Commission acknowledged a risk of overlap with existing regulations but envisioned the framework complementing requirements such as the Medical Devices Regulation. However, MedTech Europe contends the proposal falls short of that vision.
- North America > United States (0.16)
- Europe > Poland (0.05)
- Asia > Japan (0.05)
- Health & Medicine > Health Care Technology (1.00)
- Government > Regional Government > Europe Government (0.70)
What the draft European Union AI regulations mean for business
As artificial intelligence (AI) becomes increasingly embedded in the fabric of business and our everyday lives, both corporations and consumer-advocacy groups have lobbied for clearer rules to ensure that it is used fairly. In May, the European Union became the first governmental body in the world to issue a comprehensive response in the form of draft regulations aimed specifically at the development and use of AI. The proposed regulations would apply to any AI system used or providing outputs within the European Union, signaling implications for organizations around the world. Our research shows that many organizations still have a lot of work to do to prepare themselves for this regulation and address the risks associated with AI more broadly. In 2020, only 48 percent of organizations reported that they recognized regulatory-compliance risks, and even fewer (38 percent) reported actively working to address them.
- Europe (0.94)
- North America > United States (0.29)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.84)
STOA meets its International Advisory Board to discuss the Artificial Intelligence Act
Written by Philip Boucher and Carl Pierer. On 21 April 2021, the European Commission published the much-anticipated Artificial Intelligence Act (AIA), an ambitious cross-sectoral attempt to regulate artificial intelligence (AI) applications. Its aim is to ensure that all European citizens can trust AI by providing proportionate and flexible rules – harmonised across the single market – to address the specific risks posed by AI systems and set the highest standards worldwide. The proposal sets out a risk-based approach to regulating AI applications: those presenting an 'unacceptable risk' would be banned, those presenting a 'high risk' would be subjected to additional requirements before entering the market, and others, such as chatbots and 'deep fakes', would be subject to new transparency requirements. Applications presenting 'low or minimal risk' – the vast majority of AI applications – could enter the market without restrictions, although voluntary codes of conduct may be developed. Other proposed measures include a European AI Board to monitor implementation and regulatory sandboxes to facilitate innovation.
- Law (1.00)
- Government > Regional Government (0.35)
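The risk-based approach described in the STOA summary above maps each tier to a regulatory consequence. A minimal sketch of that mapping – the tier names follow the proposal, but the code structure itself is purely illustrative:

```python
# Tier names follow the AIA proposal; the mapping structure is illustrative,
# not an official categorisation scheme.
RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "additional requirements before market entry",
    "limited": "new transparency requirements (e.g. chatbots, deep fakes)",
    "minimal": "no restrictions; voluntary codes of conduct may apply",
}

def obligation(tier: str) -> str:
    """Return the regulatory consequence associated with a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

print(obligation("high"))  # additional requirements before market entry
```

The sliding-scale logic noted in the Freshfields briefing – the higher the perceived risk, the stricter the rule – is exactly this kind of ordered lookup.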